
    Multi-objective software effort estimation

    We introduce a bi-objective effort estimation algorithm that combines Confidence Interval Analysis with assessment of Mean Absolute Error. We evaluate the proposed algorithm against three alternative formulations, baseline comparators, and current state-of-the-art effort estimators, applied to five real-world datasets from the PROMISE repository involving 724 software projects in total. The results reveal that our algorithm outperforms the baselines, the state of the art, and all three alternative formulations, statistically significantly (p < 0.001) and with large effect size (A12 ≥ 0.9), over all five datasets. We also provide evidence that our algorithm establishes a new state of the art, which lies within currently claimed industrial human-expert-based thresholds, demonstrating that our findings have actionable conclusions for practicing software engineers.
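
    As an illustration of the bi-objective idea described above, the sketch below (in Python, with hypothetical data and helper names) evaluates a candidate estimator on two objectives, the mean absolute error and the confidence-interval width of its absolute residuals, and compares two candidates by Pareto dominance; it is not the paper's implementation, whose exact formulation is given in the article.
```python
import math
import statistics

def objectives(actual, predicted, z=1.96):
    """Return (mean absolute error, approximate 95% CI half-width of the
    absolute residuals) for one candidate effort model -- a stand-in for
    the paper's two objectives."""
    residuals = [abs(a - p) for a, p in zip(actual, predicted)]
    mae = statistics.mean(residuals)
    ci_width = z * statistics.stdev(residuals) / math.sqrt(len(residuals))
    return mae, ci_width

def dominates(obj_a, obj_b):
    """True if solution A Pareto-dominates solution B (both objectives minimised)."""
    return all(a <= b for a, b in zip(obj_a, obj_b)) and any(a < b for a, b in zip(obj_a, obj_b))

# Hypothetical validation data: actual effort vs. two candidate estimators.
actual = [120.0, 85.0, 240.0, 60.0]
model_a = [110.0, 90.0, 230.0, 70.0]
model_b = [150.0, 60.0, 300.0, 40.0]
print(dominates(objectives(actual, model_a), objectives(actual, model_b)))
```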

    Multi-Objective Software Effort Estimation: A Replication Study

    Replication studies increase our confidence in previous results when the findings are similar each time, and help mature our knowledge by addressing both internal and external validity aspects. However, such studies are still rare in certain software engineering fields. In this paper, we replicate and extend a previous study, which presents the current state of the art for multi-objective software effort estimation, namely CoGEE. We investigate the original research questions with an independent implementation and the inclusion of a more robust baseline (LP4EE), carried out by the first author, who was not involved in the original study. Through this replication, we strengthen both the internal and external validity of the original study. We also answer two new research questions investigating the effectiveness of CoGEE with four additional evolutionary algorithms (i.e., IBEA, MOCell, NSGA-III, SPEA2) and a well-known Java framework for evolutionary computation, namely JMetal (rather than the previously used R software), which allows us to further strengthen the external validity of the original study. The results of our replication confirm that: (1) CoGEE outperforms both baseline and state-of-the-art benchmarks statistically significantly (p < 0.001); (2) CoGEE’s multi-objective nature is what enables it to reach such good performance; (3) CoGEE’s estimation errors lie within claimed industrial human-expert-based thresholds. Moreover, our new results show that the effectiveness of CoGEE is generally neither limited to nor dependent on the choice of the multi-objective algorithm. Using CoGEE with either NSGA-II, NSGA-III, or MOCell produces human-competitive results in less than a minute. The Java version of CoGEE decreases the running time by over 99.8% with respect to its R counterpart. We have made the Java code of CoGEE publicly available to ease its adoption, as well as the data used in this study, to allow for future replication and extension of our work.

    Learning From Mistakes: Machine Learning Enhanced Human Expert Effort Estimates

    In this paper, we introduce a novel approach to predictive modeling for software engineering, named Learning From Mistakes (LFM). The core idea underlying our proposal is to automatically learn from past estimation errors made by human experts in order to predict the characteristics of their future misestimates, thereby resulting in improved future estimates. We show the feasibility of LFM by investigating whether it is possible to predict the type, severity, and magnitude of errors made by human experts when estimating the development effort of software projects, and whether it is possible to use these predictions to enhance future estimations. To this end, we conduct a thorough empirical study investigating 402 maintenance and new-development industrial software projects. The results of our study reveal that the type, severity, and magnitude of errors are all, indeed, predictable. Moreover, we find that by exploiting these predictions, we can obtain significantly better estimates than those provided by random guessing, human experts, and traditional machine learners in 31 out of the 36 cases considered (86%), with large and very large effect sizes in the majority of these cases (81%). This empirical evidence opens the door to the development of techniques that use the power of machine learning, coupled with the observation that human errors are predictable, to support engineers in estimation tasks rather than replacing them with machine-provided estimates.
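
    To make the learning-from-mistakes idea concrete, the minimal sketch below (assuming scikit-learn is available; all feature columns and figures are hypothetical placeholders) trains a regressor on past expert misestimates and uses its prediction to adjust a new expert estimate. It illustrates the general concept only and is not the authors' LFM implementation.
```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Past projects: illustrative features plus the human expert's estimate and the actual effort.
X_hist = np.array([[10, 3, 1], [25, 5, 0], [40, 8, 1], [15, 4, 0], [60, 9, 1]], dtype=float)
expert_hist = np.array([100.0, 220.0, 400.0, 150.0, 650.0])   # expert estimates (person-hours)
actual_hist = np.array([130.0, 200.0, 480.0, 160.0, 720.0])   # actual effort (person-hours)

# Learn the magnitude of past misestimates (actual - estimate) from project features.
error_model = RandomForestRegressor(n_estimators=200, random_state=0)
error_model.fit(np.column_stack([X_hist, expert_hist]), actual_hist - expert_hist)

# For a new project, correct the expert's estimate with the predicted error.
x_new = np.array([[30, 6, 1]], dtype=float)
expert_new = 260.0
predicted_error = error_model.predict(np.column_stack([x_new, [[expert_new]]]))[0]
print(f"enhanced estimate: {expert_new + predicted_error:.1f} person-hours")
```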

    Multimodal Convolutional Neural Networks to Detect Fetal Compromise During Labor and Delivery

    The gold standard for assessing whether a baby is at risk of oxygen deprivation during childbirth is continuous monitoring of the fetal heart rate with cardiotocography (CTG). The aim is to identify babies that could benefit from an emergency operative delivery (e.g., Cesarean section), in order to prevent death or permanent brain injury. The long, dynamic, and complex CTG patterns are poorly understood and known to have high false positive and false negative rates. Visual interpretation by clinicians is challenging, and reliable, accurate fetal monitoring in labor remains an enormous unmet medical need. In this work, we applied deep learning methods to achieve data-driven automated CTG evaluation. Multimodal Convolutional Neural Network (MCNN) and Stacked MCNN models were used to analyze the largest available database of routinely collected CTG and linked clinical data (comprising more than 35,000 births). We also assessed in detail the impact of signal quality on MCNN performance. On a large hold-out testing set from Oxford (n = 4429 births), MCNN improved the prediction of cord acidemia at birth when compared with clinical practice and previous computerized approaches. On two external datasets, MCNN demonstrated better performance than current feature-extraction-based methods. Our group is the first to apply deep learning to the analysis of CTG. We conclude that MCNN holds potential for the prediction of cord acidemia at birth, and further work is warranted. Despite these advances, our deep learning models are currently not suitable for the detection of severe fetal injury in the absence of cord acidemia - a heterogeneous, small, and poorly understood group. We suggest that the most promising way forward is hybrid approaches to CTG interpretation in labor, in which different diagnostic models estimate the risk of different types of fetal compromise, incorporating clinical knowledge with data-driven analyses.
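
    As a sketch of what a multimodal network for this task can look like, the PyTorch snippet below combines a 1-D convolutional branch for the CTG heart-rate trace with a small fully connected branch for clinical variables; the layer sizes, input lengths, and variable names are illustrative assumptions, not the published MCNN architecture.
```python
import torch
import torch.nn as nn

class MultimodalCNN(nn.Module):
    """Toy multimodal model: a 1-D CNN over the CTG trace fused with a
    dense branch over clinical features, ending in a single risk score."""
    def __init__(self, n_clinical: int):
        super().__init__()
        self.signal_branch = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.clinical_branch = nn.Sequential(nn.Linear(n_clinical, 16), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(32 + 16, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, ctg: torch.Tensor, clinical: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.signal_branch(ctg), self.clinical_branch(clinical)], dim=1)
        return torch.sigmoid(self.head(fused))  # estimated probability of compromise

# Usage with random tensors: batch of 8 traces (2400 samples each) and 5 clinical features.
model = MultimodalCNN(n_clinical=5)
probs = model(torch.randn(8, 1, 2400), torch.randn(8, 5))
print(probs.shape)  # torch.Size([8, 1])
```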

    Linear Programming as a Baseline for Software Effort Estimation

    Software effort estimation studies still suffer from discordant empirical results (i.e., conclusion instability), mainly due to the lack of rigorous benchmarking methods. So far only one baseline model, namely the Automatically Transformed Linear Model (ATLM), has been proposed, yet it has not been extensively assessed. In this article, we propose a novel method based on Linear Programming (dubbed Linear Programming for Effort Estimation, LP4EE) and carry out a thorough empirical study to evaluate the effectiveness of both LP4EE and ATLM for benchmarking widely used effort estimation techniques. The results of our study confirm the need to benchmark every other proposal against accurate and robust baselines. They also reveal that LP4EE is more accurate than ATLM in 17% of the experiments and more robust than ATLM against different data splits and cross-validation methods in 44% of the cases. These results suggest that using LP4EE as a baseline can help reduce conclusion instability. We make publicly available an open-source implementation of LP4EE in order to facilitate its adoption in future studies.
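
    The snippet below is a minimal sketch of how effort estimation can be cast as a linear program, fitting a linear model that minimises the sum of absolute residuals via scipy.optimize.linprog; the data and feature names are hypothetical, and LP4EE's exact objective and constraints should be taken from the paper and its open-source implementation.
```python
import numpy as np
from scipy.optimize import linprog

def lp_effort_model(X, y):
    """Fit effort = X @ w by minimising the sum of absolute residuals,
    expressed as an LP over the coefficients w and residual bounds e."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), np.ones(n)])            # minimise sum of e_i
    A_ub = np.block([[X, -np.eye(n)], [-X, -np.eye(n)]])     # X w - e <= y  and  -X w - e <= -y
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * p + [(0, None)] * n            # w free, e >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]

# Hypothetical project data: [size, team experience, intercept] -> actual effort.
X = np.array([[10, 3, 1], [25, 5, 1], [40, 8, 1], [15, 4, 1]], dtype=float)
y = np.array([120.0, 260.0, 430.0, 170.0])
w = lp_effort_model(X, y)
print("predicted effort for a new project:", np.array([30.0, 6.0, 1.0]) @ w)
```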

    A New Approach to Distribute MOEA Pareto Front Computation

    Multi-Objective Evolutionary Algorithms (MOEAs) offer compelling solutions to many real-world problems, including software engineering ones. However, their efficiency decreases as the size of the problem at hand grows, hindering their applicability in practice. In this paper, we propose a novel master-worker approach to distribute the computation of the Pareto Front (PF) for MOEAs (dubbed MOEA-DPF) and empirically evaluate it on a real-world software project management problem. With respect to previous work, our proposal can be used with any MOEA to tackle multi-objective problems regardless of their formulation/representation. Our results show that MOEA-DPF runs significantly faster (up to a 3.1x speed-up using two workers) than its sequential counterpart while maintaining (and even improving) the quality of the PF. We conclude that MOEA-DPF provides an effective and simple solution to speed up the execution of MOEAs by distributing the PF computation, making them practical for real-world problems.
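
    The sketch below shows the general master-worker pattern of farming out candidate evaluation to worker processes with Python's multiprocessing and then extracting the non-dominated set; the objective functions and population are placeholders, and this is not the MOEA-DPF implementation itself.
```python
import random
from multiprocessing import Pool

def evaluate(solution):
    """Placeholder bi-objective evaluation (both objectives minimised)."""
    return (sum(solution), sum((x - 0.5) ** 2 for x in solution))

def non_dominated(evaluated):
    """Keep (solution, objectives) pairs not dominated by any other pair."""
    front = []
    for sol, obj in evaluated:
        dominated = any(all(o2 <= o1 for o1, o2 in zip(obj, other)) and other != obj
                        for _, other in evaluated)
        if not dominated:
            front.append((sol, obj))
    return front

if __name__ == "__main__":
    population = [[random.random() for _ in range(5)] for _ in range(200)]
    with Pool(processes=2) as pool:                     # master distributes work to 2 workers
        objectives = pool.map(evaluate, population)     # workers evaluate candidates in parallel
    front = non_dominated(list(zip(population, objectives)))
    print(len(front), "non-dominated solutions")
```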

    Use of polyaspartates for the tartaric stabilisation of white and red wines and side effects on wine characteristics

    Aim: The stabilising efficacy of polyaspartate-based products (PAs), in particular potassium polyaspartate (KPA), against tartaric precipitation was tested on six different wines (three white and three red). Some side effects on wine characteristics (white wine colour stability, wine turbidity, and filterability) were also studied. Results and conclusions: All PAs showed good stabilising efficacy against tartaric precipitation according to the cold test. With the same test, the PAs were stable in wine for one year of storage, which was the total duration of the study. A dose of 100 mg/L was sufficient to stabilise the tested wines. No differences in filterability were observed in comparison with metatartaric acid (MTA). The hypothesised protective effect against colour browning in white wines was not observed. Significance and impact of the study: The international wine trade requires stable wines. This paper provides information to support wineries in managing the use of KPA, as little information on this stabilising additive is available in the literature to date.

    Na+/Ca2+ exchanger isoform 1 takes part to the Ca2+-related prosurvival pathway of SOD1 in primary motor neurons exposed to beta-methylamino-l-alanine

    Background: The cycad neurotoxin beta-methylamino-l-alanine (L-BMAA), one of the environmental trigger factors for amyotrophic lateral sclerosis/Parkinson-dementia complex (ALS/PDC), may cause neurodegeneration by disrupting organellar Ca2+ homeostasis. Through activation of the Akt/ERK1/2 pathway, Cu,Zn-superoxide dismutase (SOD1) and its non-metallated form, ApoSOD1, prevent endoplasmic reticulum (ER) stress-induced cell death in motor neurons exposed to L-BMAA. This occurs through a rapid increase of the intracellular Ca2+ concentration ([Ca2+]i), flowing in part from the extracellular compartment and in part released from the ER. However, the molecular components of this mechanism remain uncharacterized. Methods: Using an integrated approach consisting of an siRNA strategy, Western blotting, confocal double-labeling immunofluorescence, patch-clamp electrophysiology, and Fura-2/SBFI single-cell imaging, we explored, in rat motor neuron-enriched cultures, the involvement of the plasma membrane proteins Na+/Ca2+ exchanger (NCX) and purinergic P2X7 receptor, as well as that of the intracellular cADP-ribose (cADPR) pathway, in the neuroprotective mechanism of SOD1. Results: We showed that the SOD1-induced [Ca2+]i rise was prevented neither by A430879, a specific P2X7 receptor antagonist, nor by 8-bromo-cADPR, a cell-permeant antagonist of cADP-ribose, but only by CB-DMB, a pan inhibitor of NCX. The same occurred for ApoSOD1. Confocal double-labeling immunofluorescence showed strong expression of the plasmalemmal NCX1 and intracellular NCX3 isoforms. Furthermore, we identified the NCX1 reverse mode as the main mechanism responsible for the neuroprotective ER Ca2+ refilling elicited by SOD1 and ApoSOD1, through which they promoted translocation of active Akt into the nuclei of a subset of primary motor neurons. Finally, activation of NCX1 by the specific agonist CN-PYB2 protected motor neurons from L-BMAA-induced cell death, mimicking the effect of SOD1. Conclusion: Collectively, our data indicate that SOD1 and ApoSOD1 exert their neuroprotective effect by modulating ER Ca2+ content through activation of the NCX1 reverse mode and Akt nuclear translocation in a subset of primary motor neurons.

    Characterisation of Refined Marc Distillates with Alternative Oak Products Using Different Analytical Approaches

    The use of oak barrel alternatives, including oak chips, oak staves, and oak powder, is quite common in the production of spirits obtained from the distillation of fermented vegetal products such as grape pomace. This work explored the use of unconventional wood formats such as peeled and sliced wood. The use of poplar wood was also evaluated to verify its technological suitability for producing aged spirits. To this aim, GC-MS analyses were carried out to obtain an aromatic characterisation of experimental distillates treated with these products. Moreover, the same spirits were studied for classification purposes using NMR, NIR, and e-nose. A significant change in the original composition of the grape pomace distillate due to sorption phenomena was observed; the intensity of this effect was greater for poplar wood. The release of aroma compounds from the wood depended on both the toasting level and the wood assortment. Higher levels of xylovolatiles, namely whisky lactone, were measured in samples aged using sliced woods. Both the NIR and NMR analyses highlighted similarities among samples refined with oak tablets, differentiating them from the other wood types. Finally, the e-nose seemed to be a promising alternative to spectroscopic methods, both for the simplicity of sample preparation and for method portability.